Add persistent program cache for Program.compile #1912
cpcloud wants to merge 28 commits into NVIDIA:main
> Intentionally does NOT subclass ``ProgramCacheResource`` -- the wrapper
> should be duck-typed, so we test the duck-typed surface directly.
Why don't we require each cache= instance to subclass from ProgramCacheResource?
Done in 701d00b — Program.compile now isinstance-checks cache= against ProgramCacheResource and raises TypeError up front. The recording test cache subclasses the ABC; added a regression test that a duck-typed cache is rejected.
> - cpdef bint _can_load_generated_ptx() except? -1:
>       """Check if the driver can load PTX generated by the current NVRTC version."""
> + def _can_load_generated_ptx():
cpdef functions are also accessible from Python; is this change needed?
Done in 821131a — restored cpdef bint _can_load_generated_ptx() except? -1. The two cache-hit warning tests now mock driver_version (to (0, 0, 0) for the warning case, (999, 0, 0) for the no-warning case) so the cpdef body computes the desired result without needing a Python-level test seam.
leofang
left a comment
Review of the persistent program cache implementation. Findings are categorized inline as:
- Critical (1): Must fix before merging — cache-write failure drops a successfully compiled ObjectCode.
- Consideration (8): Performance/functionality concerns worth discussing; can be deferred but should be tracked.
- Nitpick (6): Not blockers for merging.
This excludes items already captured in Leo's and rwgk's earlier review comments (platformdirs removal, over-eviction race, source-directory include guard, ObjectCode._init in docstrings, cpdef→def change, duck-typed test, close() vs context manager, multi-GPU usage, star-import laziness, doc section placement, SQLiteProgramCache removal).
Overall this is a well-engineered piece of work — the TOCTOU handling, stat-guards, and atomic-write design are thorough and well-documented. The main concern is the cache-write failure path losing the compile result.
leofang
left a comment
Follow-up batch — 6 additional inline comments from the first review round:
- Consideration (3): temp-file burst-write thrashing, O(n) _enforce_size_cap on every write, UTF-8 decode introducing a new failure mode in the cache path.
- Nitpick (3): stat-key triple dedup, Windows sharing-retry dedup, _KeyBackend class hierarchy vs simpler function dispatch.
>     with contextlib.suppress(FileNotFoundError):
>         tmp_path.unlink()
>     raise
> self._enforce_size_cap()
Consideration: _enforce_size_cap is O(n) on every __setitem__.
Every write stats all files in entries/ plus tmp/ to compute the total. For a large cache (thousands of entries), this could be measurably costly on every compile. An incremental size tracker (add on write, subtract on eviction, periodic reconciliation to correct drift from external deletions) would make writes O(1) in the common case, falling back to a full scan only when reconciliation is needed.
Agreed — this is an optimization, not a correctness issue. The cache is bounded by max_size_bytes, so the typical case walks a few hundred cubins per write at microsecond stat cost. An incremental size tracker with periodic reconciliation is the right answer for very large caches; deferring to a follow-up.
Move the Program caches autosummary out of the trailing cuda.core.utils block and into a subsection of CUDA compilation toolchain so the cache classes sit next to Program/Linker. The subsection switches the current module to cuda.core.utils for the autosummary and switches back afterwards. make_program_cache_key moves with the cache classes.
The default close() on ProgramCacheResource is a no-op because the in-tree backends happen not to hold long-lived state. Future backends will -- file handles, sockets, db connections -- so the docstring now spells that out and tells callers to use the context manager or call close() explicitly so their code is portable across backends.
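The portability point can be illustrated with a hypothetical future backend that does hold long-lived state (the class below is invented for illustration; only the `close`/`__enter__`/`__exit__` surface matches the ABC described here):

```python
class HandleHoldingCache:
    """Hypothetical backend whose close() actually releases a resource."""

    def __init__(self):
        self.closed = False  # stands in for a file handle / socket / db conn

    def close(self):
        # Unlike the in-tree backends, this backend must release state here.
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.close()
        return False


# Callers who always use the context manager (or call close() explicitly)
# stay correct when they swap in a stateful backend later:
with HandleHoldingCache() as cache:
    ...  # compile with cache=cache
# cache.closed is now True; the resource was released on exit.
```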
Two docstring examples (in ProgramCacheResource and make_program_cache_key) reached into the private cuda.core._module path and used the private ObjectCode._init constructor. Switch to the public from cuda.core import ObjectCode plus ObjectCode.from_cubin(...) so users learning the cache API don't get steered at private surface.
…source
cache= used to accept any object with the get/__setitem__ duck-typed shape. Tighten the contract: the wrapper now isinstance-checks against ProgramCacheResource and raises TypeError up front when callers pass a plain dict-like. Subclasses get the get/update/close/__enter__/__exit__ defaults from the ABC and a portable interface across backends. Update the recording test cache to subclass the ABC and add a regression test that a duck-typed cache is rejected with TypeError.
The PTX cache-hit warning tests used to monkeypatch _can_load_generated_ptx itself, which forced the helper to a plain def so Cython would not early-bind the in-module call past the patch. Restore it to cpdef bint ... except? -1 and instead pin driver_version() in the test to (0, 0, 0) (warning expected) or (999, 0, 0) (no warning) so the cpdef body computes the desired result. Cython compiles each global lookup inside the helper as a fresh module-globals fetch, so swapping driver_version on the module object is enough to steer the comparison.
_LinkerBackend.validate, option_fingerprint, and hash_version_probe each re-probed _decide_nvjitlink_or_driver(), so a flapping probe could mint a key whose option fingerprint and version probe disagreed on which linker is in use. Cache the decision (and any probe exception) on a per-instance basis, instantiate _BACKENDS_BY_CODE_TYPE entries fresh per make_program_cache_key call so the cache lives exactly one call, and thread the decision into _linker_backend_and_version() instead of letting it probe a third time. Tests that monkeypatched _linker_backend_and_version now accept the extra use_driver argument (or *args/**kwargs in the failure-path test).
code_type was already normalised at Program init, but target_type was
checked case-sensitively against {"ptx", "cubin", "ltoir"} in
Program_compile (so compile(target_type="PTX") used to raise) and the
cache key path inherited the same asymmetry from
make_program_cache_key. Lowercase target_type at the top of
Program.compile and at the entry to make_program_cache_key so callers
who pass "PTX" get the same dispatch and the same cache key as "ptx".
…IELD_GATES
The names tuple and the gates dict had to be kept in sync by hand: a field added to one but forgotten in the other would silently slip out of the PTX fingerprint. Drop the tuple and iterate the dict directly, so the dict is the single source of truth for which ProgramOptions fields perturb a PTX cache key.
_extract_bytes used to let a bare FileNotFoundError bubble up from Path(code).read_bytes(), so cache[key] = obj failures pointed only at the missing path with no hint that the cache was reading a path-backed ObjectCode. Wrap the FileNotFoundError with a message that names both the cache operation and the missing file so debugging the case stays self-explanatory.
The reader test only checked that every read returned non-None; a half-written file with non-empty bytes would pass. Carry the seeded payload into the worker, count exact-byte mismatches, and require zero. The eviction-race test wrote a final uncontested cache[key] = payload + b"final" after the churner exited, so the post-race endswith assertion would pass even with a broken stat-guard. Drop the final write and assert the entry survives carrying a rewriter payload prefix -- if the stat-guarded eviction path is broken, the in-race write is the one that vanishes.
… overwrite test InMemoryProgramCache claims thread-safety via the RLock that wraps every method but had no concurrent-thread coverage. Add a stress test with 4 writers + 4 readers x 200 ops against a size-capped cache that verifies no exceptions, no deadlocks (RLock reentrance through __setitem__ -> _evict_to_caps -> popitem), and that internal accounting (_total_bytes, len(_entries), len(cache)) stays consistent under contention. FileStreamProgramCache had no overwrite test analogous to test_inmemory_cache_overwrite_replaces_value_and_updates_size. Add one that writes a key twice and asserts the second value reads back, len stays at 1, and exactly one entry file lives on disk -- so a leaked entry from a botched os.replace would surface here.
max_size_bytes=0 used to slip past the >=0 guard but turned the cache into a black hole: every write was immediately evicted on its own size-cap pass. There is no legitimate use for that, so tighten the guard to >0 (or None for unbounded) and update both backends and the matching tests.
The (st_ino, st_size, st_mtime_ns) triple was open-coded in _touch_atime (fd-based and path-based fallbacks), _prune_if_stat_unchanged, and _enforce_size_cap. Centralise the fingerprint as _stat_key(st) so all four readers compare the same fields and the invariant has one place to read.
Replace, stat-and-read, and unlink each carried their own copy of the _REPLACE_RETRY_DELAYS / sleep / try-op / PermissionError loop. Centralise the loop as _with_sharing_retry(op, on_exhausted=...) and let each caller plug in its own success-on-success and exhausted-budget behaviour. Net behaviour is unchanged (exhaustion semantics for each public helper are preserved via the on_exhausted callback).
FileStreamProgramCache shards cache files into entries/<digest[:2]>/<digest[2:]>, so the overwrite test's iterdir filter on entries/ saw only the digest-prefix subdir (no is_file() match) and reported 0 entries. Switch to rglob so the assertion counts actual entry files. CI from the previous push caught this.
Summary
Adds a persistent on-disk cache for `cuda.core.Program.compile` outputs. The high-level integration is one keyword on `Program.compile`. A second invocation with the same inputs short-circuits the entire NVRTC compile — a `cache.get(key)` (one stat + one read) and an `ObjectCode._init` from the bytes. No `Program_compile` is invoked. This is the fast path the cache exists to provide.

Public API
- `Program.compile(target_type, *, cache=...)` — convenience wrapper. Derives the key, returns a fresh `ObjectCode` on hit, stores the compile output on miss.
- `cuda.core.utils.ProgramCacheResource` — abstract bytes-in / bytes-out interface for custom backends. Provides `get`, `update` (Mapping or pairs), `clear`, and the mapping mutators (`__getitem__` / `__setitem__` / `__delitem__` / `__len__`). `__contains__` is intentionally omitted: `cache.get(key)` is the recommended idiom because the two-call `if key in cache: cache[key]` pattern is racy across processes.
- `cuda.core.utils.InMemoryProgramCache` — single-process LRU on `OrderedDict`, `threading.RLock`, size-only cap. For "compile once, look up many" workflows that don't need persistence.
- `cuda.core.utils.FileStreamProgramCache` — directory of atomic per-entry files. Safe across processes via `os.replace` + Windows sharing-violation retries on `os.replace` / read / `unlink`.
- `cuda.core.utils.make_program_cache_key` — escape hatch when the compile inputs require an `extra_digest` (`include_path`, `pre_include`, `pch`, `use_pch`, `pch_dir`, NVVM `use_libdevice=True`, NVRTC `options.name` with a directory component). `Program.compile(cache=...)` rejects those compiles with a `ValueError` pointing here.

On-disk format
Each entry is the raw compiled binary verbatim — cubin / PTX / LTO-IR — with no pickle, JSON, length prefix, or framing of any kind. Cache files are directly consumable by external NVIDIA tools (`cuobjdump`, `nvdisasm`, `cuda-gdb`).

`ObjectCode.symbol_mapping` from `name_expressions` is not preserved across a cache round-trip; the wrapper rejects `Program.compile(name_expressions=..., cache=...)` outright so the first-call-works/second-call-breaks footgun can't surface. Callers that need `get_kernel(name_expression)` should compile without `cache=`.

FileStreamProgramCache
- Writes go to `tmp/`, fsync, then `os.replace` into `entries/<2char>/<hash>`. Concurrent readers never observe partial writes. Windows `os.replace` retries on `ERROR_ACCESS_DENIED` / `ERROR_SHARING_VIOLATION` / `ERROR_LOCK_VIOLATION` (winerrors 5/32/33) within a bounded backoff (~185 ms); after the budget, the write is dropped and the next call recompiles. The same retry covers reads and `path.unlink` so eviction doesn't crash the writer that triggered it on win-64. `_is_windows_sharing_violation(exc)` filters `EACCES` only when `winerror` is absent — non-sharing winerrors are real config errors and propagate. Off-Windows `PermissionError` always propagates.
- `cache[key] = value` (and `cache.update({key: value, ...})`) accept raw bytes, bytearray, memoryview, or any `ObjectCode` (path-backed too — the file is read at write time so the cached entry is the binary content, not a path that could move). Reads return the same bytes that went in.
- `max_size_bytes` is the only knob — no element-count cap. `None` means unbounded.
- `os.utime` (fd-based on Linux/macOS via `os.supports_fd`, path-based on Windows) bumps `st_atime` regardless of mount options or `NtfsDisableLastAccessUpdate`. Eviction sorts by oldest `st_atime` first. The atime touch is stat-guarded so a racing rewriter's freshly-replaced file never has its mtime rolled back.
- `clear()`, `_enforce_size_cap()`, and the atime touch all snapshot `(ino, size, mtime_ns)` per entry and refuse to unlink / overwrite stamps if a writer replaced the file mid-operation.
- Key derivation (`make_program_cache_key`): a backend-strategy pattern with one class per `code_type` (`_NvrtcBackend` / `_LinkerBackend` / `_NvvmBackend`). Each owns its own validate / encode_code / option_fingerprint / encode_name_expressions / hash_version_probe / hash_extra_payload. The orchestrator validates `code_type` / `target_type`, dispatches to the right backend, and assembles the digest in fixed order.
Adding a new backend is one new class, not a five-place edit.
- `options.name` with a directory component: rejected without `extra_digest` because NVRTC resolves quoted `#include` directives relative to that directory — neighbour-header changes wouldn't invalidate the cache otherwise.
- A PTX cache hit re-emits the same `RuntimeWarning` the uncached path emits — loadability depends on the driver, not on whether the bytes were freshly compiled.
- When `path` is omitted, the cache directory resolves via `platformdirs.user_cache_path("cuda-python", appauthor=False, opinion=False) / "program-cache"`: `$XDG_CACHE_HOME/cuda-python/program-cache` (default `~/.cache/cuda-python/program-cache`) on Linux, `~/Library/Caches/cuda-python/program-cache` on macOS, `%LOCALAPPDATA%\cuda-python\program-cache` on Windows.
- `tmp/` self-heal: if something deletes `tmp/` after the cache is opened, the next write recreates it rather than crashing with `FileNotFoundError`.

Test plan
- `tests/test_program_cache.py` — abstract-class contract, `update` accepts mapping or pairs, transparent input-form equivalence (bytes / bytearray / memoryview / bytes-backed `ObjectCode` / path-backed `ObjectCode` all round-trip to the same on-disk bytes), `make_program_cache_key` semantics (deterministic, supported-target matrix mirrors `Program.compile`, backend probe failures fail closed but stable, env-version changes don't perturb the key on the wrong backends, options-fingerprint canonicalization for the linker path, side-effect / external-content / NVRTC `options.name`-dir-component guards, schema version mixing), filestream CRUD, atomic-write race coverage, stat-guarded prune / atime-touch / clear / size-cap, atime LRU promotes recently-read, default-dir uses `platformdirs`, `_is_windows_sharing_violation` predicate's truth table including the regression case (non-sharing winerror plus EACCES propagates), `tmp/` recreation after external wipe.
- `tests/test_program_cache_multiprocess.py` — concurrent writers same key, distinct keys, reader-vs-writer torn-file safety, size-cap eviction race (rewriter vs. churner) under stat-guarded eviction.
- `tests/test_program_compile_cache.py` — `Program.compile(cache=...)` miss/hit/error paths against a recording stub, `name_expressions` rejection, `extra_digest`-required / side-effect / NVRTC `options.name`-dir-component rejection, PTX loadability warning on cache hit (positive + negative), real-NVRTC end-to-end roundtrip across reopen.